
    FickleNet: Weakly and Semi-supervised Semantic Image Segmentation using Stochastic Inference

    The main obstacle to weakly supervised semantic image segmentation is the difficulty of obtaining pixel-level information from coarse image-level annotations. Most methods based on image-level annotations use localization maps obtained from the classifier, but these focus only on the small discriminative parts of objects and do not capture precise boundaries. FickleNet explores diverse combinations of locations on feature maps created by generic deep neural networks. It selects hidden units randomly and then uses them to obtain activation scores for image classification. FickleNet implicitly learns the coherence of each location in the feature maps, resulting in a localization map which identifies both discriminative and other parts of objects. Ensemble effects are obtained from a single network by selecting random hidden unit pairs, which means that a variety of localization maps are generated from a single image. Our approach does not require any additional training steps and only adds a simple layer to a standard convolutional neural network; nevertheless, it outperforms recent comparable techniques on the Pascal VOC 2012 benchmark in both weakly and semi-supervised settings. Comment: To appear in CVPR 2019
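
    The stochastic-inference idea described above can be illustrated with a short sketch: randomly drop hidden units in a classifier's feature map, compute a class activation map each time, and aggregate the resulting ensemble of maps from the single network. This is only a minimal illustration under assumptions, not the authors' implementation; the ResNet-50 backbone, the drop rate, the number of trials, and the max aggregation are all choices made for the example.

```python
# Minimal sketch of stochastic-inference localization (FickleNet-style idea):
# randomly drop hidden units in the last feature map, compute a class
# activation map each trial, and aggregate the maps. Backbone, drop rate,
# and max aggregation are illustrative assumptions, not the paper's design.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None)          # any image classifier works here
model.eval()
backbone = torch.nn.Sequential(*list(model.children())[:-2])  # conv features
fc_weights = model.fc.weight            # (num_classes, channels)

def stochastic_cams(image, target_class, num_trials=10, drop_p=0.5):
    """Return an aggregated localization map for one class."""
    with torch.no_grad():
        feats = backbone(image)          # (1, C, H, W)
    maps = []
    for _ in range(num_trials):
        # Random hidden-unit selection: zero out a random subset of units.
        mask = (torch.rand_like(feats) > drop_p).float()
        dropped = feats * mask
        # Class activation map from the classifier weights of the target class.
        cam = torch.einsum("c,bchw->bhw", fc_weights[target_class], dropped)
        maps.append(F.relu(cam))
    # Aggregate the ensemble: max keeps any region activated in some trial,
    # so non-discriminative object parts can also survive.
    return torch.stack(maps).max(dim=0).values

# Usage: cam = stochastic_cams(torch.randn(1, 3, 224, 224), target_class=3)
```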

    Coevolutionary dynamics on scale-free networks

    We investigate Bak-Sneppen coevolution models on scale-free networks with various degree exponents $\gamma$, including random networks. For $\gamma > 3$, the critical fitness value $f_c$ approaches a nonzero finite value in the limit $N \to \infty$, whereas $f_c$ approaches zero for $2 < \gamma \le 3$. These results are explained by showing analytically that $f_c(N) \simeq A/\langle k^2 \rangle_N$ on networks of size $N$. The avalanche size distribution $P(s)$ shows the normal power-law behavior for $\gamma > 3$. In contrast, $P(s)$ for $2 < \gamma \le 3$ has two power-law regimes: a short regime for small $s$ with a large exponent $\tau_1$, and a long regime for large $s$ with a small exponent $\tau_2$ ($\tau_1 > \tau_2$). The origin of the two power-law regimes is explained by the dynamics on an artificially made star-linked network. Comment: 5 pages, 5 figures
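
    For reference, the Bak-Sneppen dynamics underlying such studies is simple to simulate: every node carries a fitness in [0, 1], and at each step the node with the lowest fitness and all of its neighbors receive fresh random fitness values; the critical threshold $f_c$ can then be read off from the long-time behavior of the minimum fitness. The sketch below runs this rule on an arbitrary NetworkX graph; the Barabási-Albert graph and the crude way $f_c$ is estimated are assumptions for the example, not the paper's procedure.

```python
# Minimal sketch of Bak-Sneppen coevolution dynamics on a network.
# The scale-free graph used here and the rough estimate of f_c from the
# post-transient maximum of the minimum fitness are illustrative assumptions.
import random
import networkx as nx

def bak_sneppen(G, steps=50_000, seed=0):
    rng = random.Random(seed)
    fitness = {v: rng.random() for v in G.nodes}
    min_fitness_trace = []
    for _ in range(steps):
        # Find the node with the smallest fitness (the "weakest species").
        vmin = min(fitness, key=fitness.get)
        min_fitness_trace.append(fitness[vmin])
        # Replace its fitness and its neighbors' fitness with new random values.
        for v in [vmin, *G.neighbors(vmin)]:
            fitness[v] = rng.random()
    return min_fitness_trace

# Example: a scale-free-like graph (degree exponent depends on the generator).
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
trace = bak_sneppen(G)
# After a transient, the minimum fitness rarely exceeds the critical threshold,
# so its post-transient maximum gives a rough estimate of f_c.
transient = len(trace) // 2
print(f"estimated f_c ~ {max(trace[transient:]):.3f}")
```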

    Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network

    Modifying facial images with desired attributes is an important though challenging task in computer vision, where the aim is to modify single or multiple attributes of a face image. Existing methods are based either on attribute-independent approaches, where the modification is done in the latent representation, or on attribute-dependent approaches. The attribute-independent methods are limited in performance, as they require paired data for changing the desired attributes; moreover, the attribute-independent constraint may cause a loss of information and hence fail to generate the required attributes in the face image. In contrast, attribute-dependent approaches are effective because they can modify the required features while preserving the information in the given image. However, attribute-dependent approaches are sensitive and require careful model design to generate high-quality results. To address this problem, we propose an attribute-dependent face modification approach. The proposed approach is based on two generators and two discriminators that utilize the binary as well as the real representation of the attributes and, in return, generate high-quality attribute modification results. Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while preserving other facial details intact.
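
    The abstract describes a two-generator, two-discriminator design driven by binary and real-valued attribute representations, but does not spell out the architecture. The sketch below therefore only shows the generic building block such attribute-dependent methods share: a generator conditioned on a target attribute vector and a discriminator with both an adversarial score and an attribute-prediction head. The class names, layer sizes, the choice of 13 attributes, and the single generator/discriminator pair are assumptions for illustration, not the paper's model.

```python
# Minimal sketch of attribute-conditioned face modification: an encoder-decoder
# generator that consumes a target attribute vector, and a discriminator that
# scores realism and predicts attributes. Layer sizes and the single G/D pair
# are simplifying assumptions (the paper uses two generators and two
# discriminators with binary and real attribute representations).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, num_attrs=13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
        )
        # The target attribute vector is broadcast spatially and concatenated
        # with the encoded features before decoding.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + num_attrs, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x, attrs):
        h = self.encoder(x)                                   # (B, 128, H/4, W/4)
        a = attrs[:, :, None, None].expand(-1, -1, h.size(2), h.size(3))
        return self.decoder(torch.cat([h, a], dim=1))

class Discriminator(nn.Module):
    def __init__(self, num_attrs=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(128, 1)          # adversarial score
        self.attr_head = nn.Linear(128, num_attrs)  # attribute prediction

    def forward(self, x):
        f = self.features(x)
        return self.real_fake(f), self.attr_head(f)

# Usage: edit a batch of images toward binary target attributes.
G, D = Generator(), Discriminator()
img = torch.randn(4, 3, 128, 128)
target_attrs = torch.randint(0, 2, (4, 13)).float()
edited = G(img, target_attrs)
score, predicted_attrs = D(edited)
```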